-
Smart glasses have become more prevalent as they provide an increasing number of applications for users. They store various types of private information or can access it via connections established with other devices. Therefore, there is a growing need for user identification on smart glasses. In this paper, we introduce a low-power and minimally obtrusive system called SonicID, designed to authenticate users on glasses. SonicID extracts unique biometric information from users by scanning their faces with ultrasonic waves and uses this information to distinguish between users, powered by a customized binary classifier based on the ResNet-18 architecture. SonicID can authenticate a user by scanning their face for 0.06 seconds. A user study involving 40 participants confirms that SonicID achieves a true positive rate of 97.4%, a false positive rate of 4.3%, and a balanced accuracy of 96.6% using just 1 minute of training data collected for each new user. This performance is relatively consistent across different remounting sessions and days. Given this promising performance, we further discuss potential applications of SonicID and methods to improve its performance in the future.
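The abstract names a customized binary classifier based on ResNet-18. The sketch below illustrates one plausible way such a verifier could be set up in PyTorch; the single-channel input, image size, and decision threshold are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: adapts a standard ResNet-18 to a two-class
# (genuine user vs. impostor) verifier over single-channel acoustic images.
# Input shape, channel count, and decision threshold are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_verifier() -> nn.Module:
    model = resnet18(weights=None)  # trained from scratch on echo data
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, 2)  # genuine vs. impostor
    return model

model = build_verifier()
echo_scan = torch.randn(1, 1, 224, 224)        # placeholder ultrasonic face scan
p_genuine = torch.softmax(model(echo_scan), dim=1)[0, 1]
authenticated = p_genuine > 0.5                # threshold would be tuned per deployment
```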
-
We present ActSonic, an intelligent, low-power active acoustic sensing system integrated into eyeglasses that can recognize 27 different everyday activities (e.g., eating, drinking, toothbrushing) from inaudible acoustic waves around the body. It requires only a pair of miniature speakers and microphones mounted on each hinge of the eyeglasses to emit ultrasonic waves, creating an acoustic aura around the body. The acoustic signals are reflected based on the position and motion of various body parts, captured by the microphones, and analyzed by a customized self-supervised deep learning framework to infer the performed activities on a remote device such as a mobile phone or cloud server. ActSonic was evaluated in user studies with 19 participants across 19 households to assess its efficacy in everyday activity recognition. Without requiring any training data from new users (leave-one-participant-out evaluation), ActSonic detected the 27 activities with an average F1-score of 86.6% in fully unconstrained scenarios and 93.4% in prompted settings at participants' homes.
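The user-independent results above come from a leave-one-participant-out protocol. The sketch below shows the general shape of such an evaluation loop with a macro F1-score; the placeholder features and the off-the-shelf classifier are stand-ins, not the paper's self-supervised framework.

```python
# Sketch of a leave-one-participant-out (LOPO) evaluation with a macro F1-score.
# Feature arrays and the classifier are placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.rand(1900, 64)                 # placeholder acoustic features
y = np.random.randint(0, 27, size=1900)      # 27 activity classes
groups = np.repeat(np.arange(19), 100)       # 19 participants

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"LOPO macro F1: {np.mean(scores):.3f}")
```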
-
We present Ring-a-Pose, a single untethered ring that tracks continuous 3D hand poses. Located in the center of the hand, the ring emits an inaudible acoustic signal that each hand pose reflects differently. Ring-a-Pose imposes minimal obtrusion on the hand, unlike multi-ring or glove systems, and it is not affected by clothing that may cover wrist-worn systems. In a series of three user studies with a total of 36 participants, we evaluate Ring-a-Pose's performance on pose tracking and micro-finger gesture recognition. Without collecting any training data from a user, Ring-a-Pose tracks continuous hand poses with a joint error of 14.1 mm. The joint error decreases to 10.3 mm for fine-tuned user-dependent models. Ring-a-Pose recognizes 7-class micro-gestures with 90.60% and 99.27% accuracy for user-independent and user-dependent models, respectively. Furthermore, the ring exhibits promising performance when worn on any finger. Ring-a-Pose enables future smart rings to track and recognize hand poses using relatively low-power acoustic sensing.
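The "joint error" figures quoted above are consistent with a mean per-joint position error; the snippet below sketches that metric under the assumption of predicted and ground-truth joint positions stored as (frames, joints, xyz) arrays in millimetres. The array shapes and joint count are assumptions, not the paper's data format.

```python
# Sketch of a mean per-joint position error metric, as implied by the reported
# joint error (e.g., 14.1 mm user-independent, 10.3 mm fine-tuned).
# Shapes and units are assumptions: (frames, joints, xyz) in millimetres.
import numpy as np

def mean_joint_error_mm(pred: np.ndarray, truth: np.ndarray) -> float:
    """Average Euclidean distance between predicted and ground-truth joints."""
    return float(np.linalg.norm(pred - truth, axis=-1).mean())

pred = np.random.rand(1000, 21, 3) * 100    # placeholder predictions (21 hand joints)
truth = np.random.rand(1000, 21, 3) * 100   # placeholder ground truth
print(f"joint error: {mean_joint_error_mm(pred, truth):.1f} mm")
```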
-
We present HPSpeech, a silent speech interface for commodity headphones. HPSpeech utilizes the existing speakers of the headphones to emit inaudible acoustic signals. The movements of the temporomandibular joint (TMJ) during speech modify the reflection pattern of these signals, which are captured by a microphone positioned inside the headphones. To evaluate the performance of HPSpeech, we tested it on two headphones with a total of 18 participants. The results demonstrate that HPSpeech recognized 8 popular silent speech commands for controlling a music player with an accuracy of over 90%. While our tests used modified commodity hardware (both with and without active noise cancellation), the results show that sensing the movement of the TMJ could be as simple as a firmware update for ANC headsets, which already include a microphone inside the ear cup. This leads us to believe that this technique has great potential for rapid deployment in the near future. We further discuss the challenges that need to be addressed before deploying HPSpeech at scale.
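The abstract does not spell out how the reflection pattern is extracted from the microphone signal; one common approach in active acoustic sensing is to cross-correlate the received audio with the known transmitted waveform to obtain an echo profile. The sketch below illustrates that idea under assumed parameters (sample rate, sweep band, and delays are hypothetical), and it should not be read as HPSpeech's actual pipeline.

```python
# Illustrative sketch: derive a reflection ("echo") profile by cross-correlating
# the received signal with the known inaudible transmitted sweep.
# The sample rate, sweep band, and delays are assumptions for illustration.
import numpy as np
from scipy.signal import chirp, correlate

fs = 48_000                                     # assumed sample rate
t = np.arange(0, 0.01, 1 / fs)
tx = chirp(t, f0=18_000, f1=22_000, t1=t[-1])   # assumed inaudible sweep
rx = np.concatenate([np.zeros(120), 0.3 * tx, np.zeros(360)])  # synthetic delayed echo
rx += 0.01 * np.random.randn(rx.size)

echo_profile = np.abs(correlate(rx, tx, mode="valid"))
delay = int(np.argmax(echo_profile))
print(f"strongest reflection at {delay / fs * 1000:.2f} ms")
```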
-
We present an analysis of a densely repeating sample of bursts from the first repeating fast radio burst, FRB 121102. We reanalyzed the data used by Gourdji et al. and detected 93 additional bursts using our single-pulse search pipeline. In total, we detected 133 bursts in three hours of data at a center frequency of 1.4 GHz using the Arecibo telescope, and we develop robust modeling strategies to constrain the spectro-temporal properties of all of the bursts in the sample. Most of the burst profiles show a scattering tail, and the burst spectra are well modeled by a Gaussian with a median width of 230 MHz. We find a lack of emission below 1300 MHz, consistent with previous studies of FRB 121102. We also find that the peak of the log-normal distribution of wait times decreases from 207 to 75 s using our larger sample of bursts, as compared to that of Gourdji et al. Our observations do not favor either Poissonian or Weibull distributions for the burst rate distribution. We searched for periodicity in the bursts using multiple techniques but did not detect any significant period. The cumulative burst energy distribution exhibits a broken power-law shape, with lower- and higher-energy slopes of −0.4 ± 0.1 and −1.8 ± 0.2 and a break at (2.3 ± 0.2) × 10^37 erg. We provide our burst fitting routines as a Python package, burstfit (https://github.com/thepetabyteproject/burstfit), that can be used to model the spectrogram of any complex fast radio burst or pulsar pulse using robust fitting techniques. All of the other analysis scripts and results are publicly available at https://github.com/thepetabyteproject/FRB121102.
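As a reading aid, the broken power-law form quoted for the cumulative energy distribution can be written as N(>E) ∝ (E/E_b)^α with α ≈ −0.4 below and ≈ −1.8 above the break E_b ≈ 2.3 × 10^37 erg. The snippet below is a generic illustration of that functional form; it is not the API of the authors' burstfit package, and the normalization is arbitrary.

```python
# Sketch of the quoted broken power-law shape of the cumulative burst energy
# distribution N(>E). Generic illustration only, not the burstfit package API.
import numpy as np

def cumulative_rate(E_erg, norm=1.0, alpha_low=-0.4, alpha_high=-1.8, E_break=2.3e37):
    """Relative number of bursts with energy > E under the quoted broken power law."""
    E = np.asarray(E_erg, dtype=float)
    return norm * np.where(E < E_break,
                           (E / E_break) ** alpha_low,
                           (E / E_break) ** alpha_high)

energies = np.logspace(36.5, 38.5, 5)        # erg
for E, n in zip(energies, cumulative_rate(energies)):
    print(f"E = {E:.2e} erg -> relative N(>E) = {n:.2f}")
```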
-
With the upcoming commensal surveys for fast radio bursts (FRBs) and their high candidate rates, the use of machine learning algorithms for candidate classification is a necessity. Such algorithms will also play a pivotal role in sending real-time triggers for prompt follow-up with other instruments. In this paper, we use transfer learning to train state-of-the-art deep neural networks for the classification of FRB and radio frequency interference (RFI) candidates. These are convolutional neural networks that take radio frequency-time and dispersion measure-time images as inputs. We trained these networks using simulated FRBs and real RFI candidates from telescopes at the Green Bank Observatory. We present 11 deep learning models, each with an accuracy and recall above 99.5 per cent on our test data set comprising real RFI and pulsar candidates. As we demonstrate, these algorithms are telescope and frequency agnostic and are able to detect all FRBs with signal-to-noise ratios above 10 in ASKAP and Parkes data. We also provide an open-source Python package, fetch (Fast Extragalactic Transient Candidate Hunter), for classification of candidates using our models. With fetch, these models can be deployed alongside any commensal search pipeline for real-time candidate classification.
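The sketch below illustrates the general transfer-learning pattern described here: reuse a pretrained backbone for each of the two candidate images (frequency-time and DM-time) and train only a small new head for FRB vs. RFI. It is a generic illustration in PyTorch, not the fetch package's actual architecture or API; the backbone choice, feature sizes, and head are assumptions.

```python
# Generic transfer-learning sketch: frozen pretrained backbones for the
# frequency-time and DM-time images, plus a small trainable FRB-vs-RFI head.
# Not the fetch package's architecture; all specifics here are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def frozen_backbone() -> nn.Module:
    m = resnet18(weights=ResNet18_Weights.DEFAULT)
    for p in m.parameters():
        p.requires_grad = False          # transfer learning: freeze features
    m.fc = nn.Identity()                 # expose 512-d features
    return m

class CandidateClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.ft_branch = frozen_backbone()   # frequency-time image
        self.dmt_branch = frozen_backbone()  # dispersion measure-time image
        self.head = nn.Sequential(nn.Linear(1024, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, ft, dmt):
        feats = torch.cat([self.ft_branch(ft), self.dmt_branch(dmt)], dim=1)
        return torch.sigmoid(self.head(feats))  # P(candidate is a real FRB)

model = CandidateClassifier()
ft = torch.randn(4, 3, 224, 224)              # placeholder candidate images
dmt = torch.randn(4, 3, 224, 224)
print(model(ft, dmt).shape)                   # torch.Size([4, 1])
```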
-
The origin of fast radio bursts (FRBs) remains a mystery, even with the increased number of discoveries in the last 3 yr. Growing evidence suggests that some FRBs may originate from magnetars. Large single-dish telescopes such as the Arecibo Observatory (AO) and the Green Bank Telescope (GBT) have the sensitivity to detect FRB 121102-like bursts at gigaparsec distances. Here, we present searches using AO and GBT that aimed to find potential radio bursts at 11 sites of past gamma-ray bursts that show evidence for the birth of a magnetar. We also performed a search towards GW170817, which has a merger remnant whose nature remains uncertain. We place 10σ fluence upper limits of ≈0.036 Jy ms at 1.4 GHz and ≈0.063 Jy ms at 4.5 GHz for the AO data, and fluence upper limits of ≈0.085 Jy ms at 1.4 GHz and ≈0.098 Jy ms at 1.9 GHz for the GBT data, for a maximum pulse width of ≈42 ms. The AO observations had sufficient sensitivity to detect any FRB of similar luminosity to the one recently detected from the Galactic magnetar SGR 1935+2154. Assuming a Schechter function for the luminosity function of FRBs, we find that our non-detections favour a steep power-law index (α ≲ −1.1) and a large cut-off luminosity (L0 ≳ 10^41 erg s^−1).
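For readers unfamiliar with the Schechter form used here, it is dN/dL ∝ (L/L0)^α · exp(−L/L0). The snippet below evaluates that shape using the quoted limiting values (α ≲ −1.1, L0 ≳ 10^41 erg/s) purely as example parameters; the normalization is arbitrary and the luminosity grid is illustrative.

```python
# Sketch of the Schechter luminosity function, dN/dL ∝ (L/L0)^α · exp(−L/L0),
# evaluated at the quoted limiting parameter values as examples only.
import numpy as np

def schechter(L, alpha=-1.1, L0=1e41):
    """Unnormalised differential rate of FRBs per unit luminosity (erg/s)."""
    x = np.asarray(L, dtype=float) / L0
    return x ** alpha * np.exp(-x)

L = np.logspace(39, 43, 5)                  # erg/s
for Li, phi in zip(L, schechter(L)):
    print(f"L = {Li:.1e} erg/s -> relative dN/dL = {phi:.3e}")
```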
